Search Results
Search for: All records
Total Resources: 3
- Author / Contributor
  - Yu, Xinyan (3)
  - Hajishirzi, Hannaneh (2)
  - Asai, Akari (1)
  - Bazilinskyy, Pavlo (1)
  - Blevins, Terra (1)
  - Dey, Debargha (1)
  - Gonen, Hila (1)
  - Kudugunta, Sneha (1)
  - Martens, Marieke (1)
  - Min, Sewon (1)
  - Parker, Callum (1)
  - Reid, Machel (1)
  - Ruder, Sebastian (1)
  - Tomitsch, Martin (1)
  - Tran, Tram Thi Minh (1)
  - Tsvetkov, Yulia (1)
  - Zettlemoyer, Luke (1)
-
Despite remarkable advancements in few-shot generalization in natural language processing, most models are developed and evaluated primarily in English. To establish a rigorous and equitable evaluation framework for few-shot cross-lingual transfer, we introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across 54 languages in a sequence-to-sequence format and provides a fixed set of few-shot examples and instructions. Using BUFFET, we perform thorough evaluations of ten state-of-the-art multilingual large language models with different transfer methods, namely in-context learning and fine-tuning. Our findings reveal significant room for improvement in few-shot in-context cross-lingual transfer. Strong multilingual pre-trained or instruction-tuned models such as BLOOM or ChatGPT often lag behind much smaller mT5-base models given the same number of few-shot samples, particularly in low-resource languages. Our analysis suggests avenues for future research in few-shot cross-lingual transfer.
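The setup this first result describes, few-shot in-context learning with a fixed instruction and a fixed set of demonstrations serialized into a single sequence-to-sequence input, is illustrated by the minimal Python sketch below. The `Example` dataclass and `build_prompt` helper are hypothetical names used for illustration, not BUFFET's actual interface.

```python
# Minimal sketch of prompt construction for few-shot in-context
# cross-lingual transfer: a fixed instruction plus k fixed demonstrations,
# followed by the test input. All names here are illustrative assumptions,
# not BUFFET's real API.
from dataclasses import dataclass


@dataclass
class Example:
    source: str  # input text, e.g. a sentence in the target language
    target: str  # expected output, e.g. a label


def build_prompt(instruction: str, demos: list[Example], query: str) -> str:
    """Serialize the instruction, demonstrations, and query into one prompt."""
    lines = [instruction, ""]
    for demo in demos:
        lines += [f"Input: {demo.source}", f"Output: {demo.target}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical German sentiment task with two fixed demonstrations.
    demos = [
        Example("Das Essen war hervorragend.", "positive"),
        Example("Der Service war leider enttäuschend.", "negative"),
    ]
    print(build_prompt(
        "Classify the sentiment of the sentence as positive or negative.",
        demos,
        "Ich komme gerne wieder.",
    ))
```

Holding the instruction and demonstrations fixed in this way is what lets the benchmark compare in-context learning against fine-tuning on an equal footing across languages.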
-
Tran, Tram Thi Minh; Parker, Callum; Yu, Xinyan; Dey, Debargha; Martens, Marieke; Bazilinskyy, Pavlo; Tomitsch, Martin (Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies)
With the rise of autonomous vehicles (AVs) in transportation, a pressing concern is their seamless integration into daily life. In multi-pedestrian settings, two challenges emerge: ensuring unambiguous communication to individual pedestrians via external Human-Machine Interfaces (eHMIs), and the influence of one pedestrian on another. We conducted an experiment (N=25) using a multi-pedestrian virtual reality simulator. Participants were paired and exposed to three distinct eHMI concepts: on the vehicle, within the surrounding infrastructure, and on the pedestrian themselves, against a baseline without any eHMI. Results indicate that all eHMI concepts improved clarity of communication over the baseline, but their effectiveness differed. While pedestrian- and infrastructure-based communication often provided more direct clarity, vehicle-based cues at times introduced uncertainty. Furthermore, the study identified the influence of co-located pedestrians: in the absence of clear AV communication, individuals frequently sought cues from their peers.
-
Yu, Xinyan; Min, Sewon; Zettlemoyer, Luke; Hajishirzi, Hannaneh (ACL)
Full Text Available